
    A-priori Validation of Subgrid-scale Models for Astrophysical Turbulence

    We perform a-priori validation tests of subgrid-scale (SGS) models for the turbulent transport of momentum, energy and passive scalars. To this end, we conduct two sets of high-resolution hydrodynamical simulations with a Lagrangian code: an isothermal turbulent box with rms Mach numbers of 0.3, 2 and 8, and the classical wind tunnel where a cold cloud traveling through a hot medium gradually dissolves due to fluid instabilities. Two SGS models are examined: the eddy diffusivity (ED) model widely adopted in astrophysical simulations and the "gradient model" of Clark et al. (1979). We find that both models predict the magnitude of the SGS terms equally well (correlation coefficient > 0.8). However, the gradient model provides excellent predictions of the orientation and shape of the SGS terms, while the ED model predicts both poorly, indicating that isotropic diffusion is a poor approximation of the instantaneous turbulent transport. The best-fit coefficient of the gradient model is in the range [0.16, 0.21] for momentum transport, and the turbulent Schmidt and Prandtl numbers are both close to unity, in the range [0.92, 1.15]. Comment: ApJ accepted; analysis code available at https://github.com/huchiayu/Lapriori.j
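
    As an illustration of the a-priori testing workflow, below is a minimal sketch of the Clark et al. (1979) gradient model, tau_ij ≈ C Δ² (∂u_i/∂x_k)(∂u_j/∂x_k), evaluated from resolved velocity gradients on a toy Cartesian grid. The paper itself analyzes data from a Lagrangian (particle) code, so the grid-based gradients, function names, and random stand-in data here are assumptions for illustration only.

    # Minimal a-priori sketch for the Clark et al. (1979) "gradient model",
    # tau_ij ~ C * Delta^2 * (du_i/dx_k)(du_j/dx_k), on gridded velocity data.
    # Illustrative assumptions only; the paper works with Lagrangian data.
    import numpy as np

    def gradient_model_stress(u, dx, coeff=0.18):
        """SGS stress predicted by the gradient model from resolved velocities.

        u     : array of shape (3, N, N, N), resolved velocity components
        dx    : grid spacing (used as the filter scale Delta here)
        coeff : model coefficient (the paper reports best-fit values ~0.16-0.21)
        """
        # grads[i][k] = du_i/dx_k
        grads = [np.gradient(u[i], dx) for i in range(3)]
        tau = np.empty((3, 3) + u.shape[1:])
        for i in range(3):
            for j in range(3):
                tau[i, j] = coeff * dx**2 * sum(grads[i][k] * grads[j][k] for k in range(3))
        return tau

    def correlation(a, b):
        """Pearson correlation between two SGS fields (flattened over space)."""
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]

    # Toy usage with random data standing in for a filtered high-resolution snapshot.
    rng = np.random.default_rng(0)
    u = rng.standard_normal((3, 32, 32, 32))
    tau_model = gradient_model_stress(u, dx=1.0 / 32)
    # In a real a-priori test, the reference stress would come from explicitly
    # filtering the high-resolution simulation; here we only check symmetry.
    print(correlation(tau_model[0, 1], tau_model[1, 0]))  # symmetric by construction -> 1.0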

    A Semi-Automated Approach to Medical Image Segmentation using Conditional Random Field Inference

    Medical image segmentation plays a crucial role in delivering effective patient care across diagnostic and treatment modalities. Manual delineation of target volumes and all critical structures is a tedious and highly time-consuming process that introduces uncertainty into patient treatment outcomes. Fully automatic methods hold great promise for reducing cost and time while improving accuracy and eliminating expert variability, yet great challenges remain. Legally and ethically, human oversight must be integrated with "smart tools", favoring a semi-automatic technique that can leverage the best aspects of both human and computer. In this work we formulate a semi-automatic segmentation framework as an energy minimization problem on a Conditional Random Field (CRF). We show that human input can be used as adaptive training data to condition a probabilistic boundary term modeled for the heterogeneous boundary characteristics of anatomical structures. We demonstrate that our method adapts to multiple structures and image modalities using a single CRF framework and tools to learn the probabilistic terms interactively. To tackle the more difficult multi-class segmentation problem, we develop a new ensemble one-vs-rest graph cut algorithm. Each graph in the ensemble performs a simple and efficient bi-class (a target class vs. the rest of the classes) segmentation, and the final segmentation is obtained by majority vote; a sketch of this ensemble idea is given below. Our algorithm is both faster and more accurate than the prior multi-class method, which iteratively swaps classes. In this thesis, we also include novel volumetric segmentation algorithms that employ deep learning and indicate how to combine our CRF framework with convolutional neural networks (CNNs), which would allow user guidance to be incorporated into CNN-based deep learning for this task. We believe a deep learning based method interactively guided by a human expert is the ideal solution for medical image segmentation.
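
    To illustrate the ensemble one-vs-rest idea, here is a minimal sketch in which each class is segmented by a simple binary class-vs-rest rule (a trivial stand-in for the thesis' bi-class graph cut) and the per-voxel results are combined. The combination and tie-breaking rule shown is an assumption for illustration, not the thesis' exact majority-vote procedure.

    # Sketch of an ensemble one-vs-rest segmentation: run one binary
    # (class-vs-rest) segmentation per class and combine the results per voxel.
    # The binary solver here is a unary-only threshold standing in for a
    # bi-class graph cut; names and the tie-breaking rule are assumptions.
    import numpy as np

    def binary_one_vs_rest(prob_c, threshold=0.5):
        """Stand-in for a bi-class graph cut: claim a voxel for class c if its
        (CRF/CNN-derived) probability for c exceeds the threshold."""
        return prob_c > threshold

    def ensemble_segmentation(probs):
        """probs: array (C, *vol_shape) of per-class probabilities.
        Returns an integer label volume; 0 means background/'rest'."""
        C = probs.shape[0]
        votes = np.stack([binary_one_vs_rest(probs[c]) for c in range(C)])
        labels = np.zeros(probs.shape[1:], dtype=np.int32)
        claimed = votes.any(axis=0)
        # Where at least one binary segmenter claims the voxel, break ties by
        # the highest class probability (a simple proxy for the majority vote).
        best = probs.argmax(axis=0) + 1          # classes numbered from 1
        labels[claimed] = best[claimed]
        return labels

    # Toy usage: 3 classes over an 8x8x8 volume of random "probabilities".
    rng = np.random.default_rng(1)
    probs = rng.random((3, 8, 8, 8))
    probs /= probs.sum(axis=0, keepdims=True)
    print(np.unique(ensemble_segmentation(probs)))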

    Instance Neural Radiance Field

    This paper presents one of the first learning-based NeRF 3D instance segmentation pipelines, dubbed Instance Neural Radiance Field, or Instance NeRF. Taking a NeRF pretrained from multi-view RGB images as input, Instance NeRF learns 3D instance segmentation of a given scene, represented as an instance field component of the NeRF model. To this end, we adopt a 3D proposal-based mask prediction network on volumetric features sampled from the NeRF, which generates discrete 3D instance masks. The coarse 3D mask prediction is then projected to image space and matched to 2D segmentation masks from different views produced by existing panoptic segmentation models, which are used to supervise the training of the instance field. Notably, beyond generating consistent 2D segmentation maps for novel views, Instance NeRF can query instance information at any 3D point, which greatly enhances NeRF-based object segmentation and manipulation. Our method is also one of the first to achieve such results without ground-truth instance information during inference. Evaluated on synthetic and real-world NeRF datasets with complex indoor scenes, Instance NeRF surpasses previous NeRF segmentation works and competitive 2D segmentation methods in segmentation performance on unseen views. See the demo video at https://youtu.be/wW9Bme73coI
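
    To make the supervision step concrete, below is a minimal sketch of matching 2D masks obtained by projecting coarse 3D instance proposals against masks from a 2D panoptic segmentation model, using mask IoU and Hungarian assignment. The IoU-plus-Hungarian matching rule and all function names are assumptions for illustration, not necessarily the paper's exact procedure.

    # Sketch of the mask-matching step: pair projected 3D-proposal masks with
    # 2D panoptic masks by IoU so the panoptic masks can supervise the
    # instance field. Matching rule and names are illustrative assumptions.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def mask_iou(a, b):
        """Intersection-over-union of two boolean 2D masks."""
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union > 0 else 0.0

    def match_masks(projected, panoptic):
        """projected: list of boolean masks from the projected 3D proposals.
        panoptic : list of boolean masks from a 2D panoptic segmentation model.
        Returns (projected_idx, panoptic_idx) pairs maximizing total IoU."""
        iou = np.array([[mask_iou(p, q) for q in panoptic] for p in projected])
        rows, cols = linear_sum_assignment(-iou)   # maximize IoU
        return [(int(r), int(c)) for r, c in zip(rows, cols) if iou[r, c] > 0]

    # Toy usage with two square masks on each side.
    h, w = 64, 64
    proj = [np.zeros((h, w), bool), np.zeros((h, w), bool)]
    pan  = [np.zeros((h, w), bool), np.zeros((h, w), bool)]
    proj[0][10:30, 10:30] = True; pan[1][12:32, 12:32] = True
    proj[1][40:60, 40:60] = True; pan[0][40:58, 40:58] = True
    print(match_masks(proj, pan))   # -> [(0, 1), (1, 0)]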